
    Double-descent curves in neural networks: a new perspective using Gaussian processes

    Double-descent curves in neural networks describe the phenomenon that the generalisation error initially descends with increasing parameters, then grows after reaching an optimal number of parameters which is less than the number of data points, but then descends again in the overparameterised regime. Here we use a neural network Gaussian process (NNGP), which maps exactly to a fully connected network (FCN) in the infinite-width limit, combined with techniques from random matrix theory, to calculate this generalisation behaviour, with a particular focus on the overparameterised regime. An advantage of our NNGP approach is that the analytical calculations are easier to interpret. We argue that the generalisation performance of neural networks improves in the overparameterised regime precisely because that is where they converge to their equivalent Gaussian process.
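    The NNGP correspondence invoked here can be made concrete with a short numerical sketch: a minimal Python implementation of the infinite-width NNGP kernel of a ReLU FCN (the arc-cosine recursion of Cho and Saul), followed by exact GP regression to read off a test error. The depth, weight and bias variances, ReLU nonlinearity, and synthetic data are illustrative assumptions; the paper's random-matrix-theory calculation of the double-descent curve is not reproduced here.

        import numpy as np

        def nngp_kernel(X1, X2, depth=3, sigma_w=1.0, sigma_b=0.1):
            """Infinite-width NNGP kernel of a deep ReLU FCN (arc-cosine recursion)."""
            d = X1.shape[1]
            K12 = sigma_b**2 + sigma_w**2 * (X1 @ X2.T) / d
            K11 = sigma_b**2 + sigma_w**2 * np.sum(X1**2, axis=1) / d
            K22 = sigma_b**2 + sigma_w**2 * np.sum(X2**2, axis=1) / d
            for _ in range(depth):
                norm = np.sqrt(np.outer(K11, K22))
                theta = np.arccos(np.clip(K12 / norm, -1.0, 1.0))
                # one ReLU layer in the infinite-width limit (Cho & Saul, 2009)
                K12 = sigma_b**2 + sigma_w**2 / (2 * np.pi) * norm * (
                    np.sin(theta) + (np.pi - theta) * np.cos(theta))
                K11 = sigma_b**2 + sigma_w**2 / 2 * K11
                K22 = sigma_b**2 + sigma_w**2 / 2 * K22
            return K12

        def gp_test_error(Xtr, ytr, Xte, yte, noise=1e-3):
            """Posterior mean of the NNGP and its mean squared error on test data."""
            Ktr = nngp_kernel(Xtr, Xtr)
            Kte = nngp_kernel(Xte, Xtr)
            alpha = np.linalg.solve(Ktr + noise * np.eye(len(ytr)), ytr)
            return np.mean((Kte @ alpha - yte) ** 2)

        # Toy usage on synthetic regression data.
        rng = np.random.default_rng(0)
        Xtr, Xte = rng.normal(size=(50, 10)), rng.normal(size=(20, 10))
        w = rng.normal(size=10)
        ytr, yte = np.tanh(Xtr @ w), np.tanh(Xte @ w)
        print(gp_test_error(Xtr, ytr, Xte, yte))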

    Do deep neural networks have an inbuilt Occam's razor?

    The remarkable performance of overparameterized deep neural networks (DNNs) must arise from an interplay between network architecture, training algorithms, and structure in the data. To disentangle these three components, we apply a Bayesian picture, based on the functions expressed by a DNN, to supervised learning. The prior over functions is determined by the network, and is varied by exploiting a transition between ordered and chaotic regimes. For Boolean function classification, we approximate the likelihood using the error spectrum of functions on data. When combined with the prior, this accurately predicts the posterior, measured for DNNs trained with stochastic gradient descent. This analysis reveals that structured data, combined with an intrinsic Occam's razor-like inductive bias towards (Kolmogorov) simple functions that is strong enough to counteract the exponential growth of the number of functions with complexity, is a key to the success of DNNs.
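    As a hedged illustration of this Bayesian picture, the sketch below estimates the prior over Boolean functions by randomly sampling the weights of a small MLP, approximates the likelihood by penalising training errors, and multiplies the two into an unnormalised posterior. The network size, weight variance (loosely playing the role of the ordered/chaotic control parameter), target function, and error-penalty scale are illustrative assumptions rather than the paper's setup.

        import itertools
        import numpy as np

        rng = np.random.default_rng(0)
        n = 4
        X = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)

        def sample_function(width=32, sigma_w=2.0):
            """One draw from the prior: random MLP -> Boolean function on all 2^n inputs."""
            W1 = rng.normal(0, sigma_w / np.sqrt(n), (n, width))
            w2 = rng.normal(0, sigma_w / np.sqrt(width), width)
            return tuple((np.maximum(X @ W1, 0) @ w2 > 0).astype(int))

        prior = {}
        for _ in range(20000):        # estimate the prior P(f) by sampling parameters
            f = sample_function()
            prior[f] = prior.get(f, 0) + 1

        target = tuple(int(x.sum() % 2 == 0) for x in X)  # assumed target labels
        S = slice(0, 8)               # training set: the first 8 inputs
        beta = 5.0                    # assumed error-penalty scale
        posterior = {f: c * np.exp(-beta * sum(a != b for a, b in zip(f[S], target[S])))
                     for f, c in prior.items()}           # prior x likelihood
        Z = sum(posterior.values())
        best = max(posterior, key=posterior.get)
        print("MAP function:", best, "posterior mass:", posterior[best] / Z)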

    Is SGD a Bayesian sampler? Well, almost

    Overparameterised deep neural networks (DNNs) are highly expressive and so can, in principle, generate almost any function that fits a training dataset with zero error. The vast majority of these functions will perform poorly on unseen data, and yet in practice DNNs often generalise remarkably well. This success suggests that a trained DNN must have a strong inductive bias towards functions with low generalisation error. Here we empirically investigate this inductive bias by calculating, for a range of architectures and datasets, the probability $P_{SGD}(f\mid S)$ that an overparameterised DNN, trained with stochastic gradient descent (SGD) or one of its variants, converges on a function $f$ consistent with a training set $S$. We also use Gaussian processes to estimate the Bayesian posterior probability $P_B(f\mid S)$ that the DNN expresses $f$ upon random sampling of its parameters, conditioned on $S$. Our main findings are that $P_{SGD}(f\mid S)$ correlates remarkably well with $P_B(f\mid S)$ and that $P_B(f\mid S)$ is strongly biased towards low-error and low-complexity functions. These results imply that a strong inductive bias in the parameter-function map (which determines $P_B(f\mid S)$), rather than a special property of SGD, is the primary explanation for why DNNs generalise so well in the overparameterised regime. While our results suggest that the Bayesian posterior $P_B(f\mid S)$ is the first-order determinant of $P_{SGD}(f\mid S)$, there remain second-order differences that are sensitive to hyperparameter tuning. A function-probability picture, based on $P_{SGD}(f\mid S)$ and/or $P_B(f\mid S)$, can shed new light on the way that variations in architecture or hyperparameter settings such as batch size, learning rate, and optimiser choice affect DNN performance.
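    A minimal sketch of these two estimators under toy assumptions: a tiny one-hidden-layer network on a small Boolean task, full-batch gradient descent standing in for SGD, and brute-force rejection sampling standing in for the paper's GP-based estimate of $P_B(f\mid S)$. Functions are identified by their outputs on a fixed test set, and run frequencies estimate the two probabilities.

        from collections import Counter
        import numpy as np

        rng = np.random.default_rng(1)
        Xtr = rng.integers(0, 2, (8, 5)).astype(float)  # assumed training inputs S
        ytr = (Xtr.sum(1) % 2).astype(float)            # assumed labels
        Xte = rng.integers(0, 2, (4, 5)).astype(float)  # functions are read off here

        def init(width=32):
            return [rng.normal(0, 5**-0.5, (5, width)), np.zeros(width),
                    rng.normal(0, width**-0.5, width)]

        def forward(p, X):
            W1, b1, w2 = p
            return np.maximum(X @ W1 + b1, 0) @ w2

        def fits(p):  # zero classification error on the training set
            return np.all((forward(p, Xtr) > 0.5) == (ytr > 0.5))

        def train(p, lr=0.2, steps=2000):  # full-batch GD on squared error
            W1, b1, w2 = p
            for _ in range(steps):
                H = np.maximum(Xtr @ W1 + b1, 0)
                d = H @ w2 - ytr                  # residuals
                gH = np.outer(d, w2) * (H > 0)    # backprop through the ReLU
                w2 -= lr * H.T @ d / len(ytr)
                W1 -= lr * Xtr.T @ gH / len(ytr)
                b1 -= lr * gH.mean(0)
            return p

        p_sgd, p_b = Counter(), Counter()
        for _ in range(200):          # frequencies of trained functions ~ P_SGD(f|S)
            p = train(init())
            if fits(p):
                p_sgd[tuple((forward(p, Xte) > 0.5).astype(int))] += 1
        for _ in range(100000):       # accepted random draws ~ P_B(f|S)
            p = init()
            if fits(p):
                p_b[tuple((forward(p, Xte) > 0.5).astype(int))] += 1
        print(p_sgd.most_common(3), p_b.most_common(3))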

    Neural networks are a priori biased towards Boolean functions with low entropy

    Understanding the inductive bias of neural networks is critical to explaining their ability to generalise. Here, for one of the simplest neural networks -- a single-layer perceptron with $n$ input neurons, one output neuron, and no threshold bias term -- we prove that upon random initialisation of weights, the a priori probability $P(t)$ that it represents a Boolean function classifying $t$ points in $\{0,1\}^n$ as 1 has a remarkably simple form: $P(t) = 2^{-n}$ for $0 \leq t < 2^n$. Since a perceptron can express far fewer Boolean functions with small or large values of $t$ (low entropy) than with intermediate values of $t$ (high entropy), there is, on average, a strong intrinsic a priori bias towards individual functions with low entropy. Furthermore, within a class of functions with fixed $t$, we often observe a further intrinsic bias towards functions of lower complexity. Finally, we prove that, regardless of the distribution of inputs, the bias towards low entropy becomes monotonically stronger upon adding ReLU layers, and we empirically show that increasing the variance of the bias term has a similar effect.
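    The flat distribution over $t$ is easy to check numerically. The sketch below draws random Gaussian weight vectors for a bias-free perceptron and histograms $t$, the number of points of $\{0,1\}^n$ it classifies as 1; every bin should land near $2^{-n}$. The choice of $n$, the Gaussian weight distribution, and the sample size are illustrative.

        import itertools
        import numpy as np

        rng = np.random.default_rng(0)
        n = 4
        X = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)

        samples = 200000
        W = rng.normal(size=(samples, n))   # random weight initialisations
        t = ((X @ W.T) > 0).sum(axis=0)     # points classified as 1, per draw
        freq = np.bincount(t, minlength=2**n) / samples
        print(np.round(freq, 4))            # each entry should be near 2^-4 = 0.0625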

    Role of the LPA1 receptor in mood and emotional regulation

    Depression is a debilitating psychiatric condition characterized by anhedonia and behavioural despair, among other symptoms. Despite the high prevalence and devastating impact of depression, the underlying neurobiological mechanisms of mood disorders are still not well known. Regardless of its complexity, central features of this disease can be modelled in rodents in order to better understand the potential underlying mechanisms. Moreover, the lack of the LPA1 receptor compromises the morphological and functional integrity of the limbic circuit and hippocampal neurogenesis, induces cognitive alterations in hippocampal-dependent tasks and dysfunctional coping with chronic stress, provokes exaggerated endocrine responses to emotional stimuli, and impairs adaptation of the hypothalamic-pituitary-adrenal axis after chronic stress, factors that have all been related to depression. Here, we sought to establish the involvement of the LPA1 receptor in the regulation of mood and emotion. To this end, active coping responses to stress were examined in wild-type and maLPA1-null mice using the forced swimming test (FST). To assess hedonic behaviour, the saccharin preference test and the female urine sniffing test were used. Our data indicated that the absence of the LPA1 receptor significantly affected coping strategies: null mice displayed less immobility than wild-type mice in the FST, and exhibited more climbing and less swimming behaviour, responses that could be interpreted as an emotional over-reaction (i.e., a panic-like response) to stressful situations. Concerning hedonic behaviour, the lack of the LPA1 receptor diminished saccharin preference and female urine sniffing time. Overall, these data support a role for the LPA1 receptor in mood and emotional regulation. Specifically, the lack of this receptor induced emotional dysregulation and anhedonic behaviour, a core symptom of depression. Funding: Universidad de Málaga, Campus de Excelencia Andalucía Tech; Andalusian Regional Ministries of Economy, Innovation, Science and Employment (SEJ-1863; CTS643) and of Health (PI-0234-2013; Nicolas Monardes Programme); MINECO (PSI2013-44901-P); and National Institute of Health Carlos III (Sara Borrel).

    Student, teacher, and context variables in the prediction of academic achievement in Biology: an analysis from a multilevel perspective

    This study analyses the contribution of student-level and context-level variables to the prediction of academic achievement in Bachillerato (Spanish upper secondary education). Information was obtained from 988 final-year Bachillerato students and their 57 Biology teachers. The data were analysed from a multilevel perspective. The results indicate that, of the observed variability in Biology achievement, 85.6% is due to student-level variables, while the remaining 14.4% corresponds to class-level variables. At the student level, Biology achievement was associated with the learning approach, prior knowledge, school absenteeism, and the parents' educational level. At the class level, achievement was associated only with the teacher's teaching approach, and not directly but through the student's approach to studying.
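    A hedged sketch of the multilevel setup described above, using Python's statsmodels: students (level 1) nested within Biology classes (level 2), with the class-level share of variance read off as an intraclass correlation. The synthetic data, column names, and effect sizes are assumptions for illustration; the study's actual predictors include the learning approach, prior knowledge, absenteeism, parental education, and the teacher's teaching approach.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n_students, n_classes = 988, 57
        class_id = np.repeat(np.arange(n_classes), n_students // n_classes + 1)[:n_students]
        df = pd.DataFrame({
            "class_id": class_id,
            "prior_knowledge": rng.normal(size=n_students),
            "absenteeism": rng.poisson(2, size=n_students),
        })
        class_effect = rng.normal(0, 0.4, n_classes)[class_id]
        df["biology_score"] = (5 + 0.8 * df["prior_knowledge"]
                               - 0.2 * df["absenteeism"]
                               + class_effect + rng.normal(0, 1, n_students))

        # Random-intercept model: fixed student-level effects, random class effect.
        result = smf.mixedlm("biology_score ~ prior_knowledge + absenteeism",
                             df, groups=df["class_id"]).fit()
        class_var = float(result.cov_re.iloc[0, 0])
        icc = class_var / (class_var + result.scale)   # class-level variance share
        print(result.summary())
        print(f"class-level share of variance (ICC): {icc:.3f}")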